
    Deep Photo Style Transfer

    This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer: even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom, fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.
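    The core constraint, that the input-to-output mapping be locally affine in colorspace, can be made concrete with a small sketch. The NumPy code below (under our own naming, not the paper's energy term, which is built on a matting Laplacian) fits a per-window affine color transform by least squares and measures how far a window deviates from one.

    ```python
    # Illustrative sketch (our own naming, not the paper's energy term):
    # "locally affine in colorspace" means that inside each small window,
    # output RGB is an affine function of input RGB: O = A @ I + b.
    import numpy as np

    def fit_local_affine(inp_patch, out_patch):
        """Least-squares fit of a 3x4 affine color transform on one window.

        inp_patch, out_patch: (N, 3) RGB values from the same window.
        Returns M of shape (3, 4) with out ~= M @ [r, g, b, 1].
        """
        X = np.hstack([inp_patch, np.ones((inp_patch.shape[0], 1))])  # (N, 4)
        M, *_ = np.linalg.lstsq(X, out_patch, rcond=None)             # (4, 3)
        return M.T

    def affine_residual(inp_patch, out_patch):
        """Deviation of a window from the best locally affine transform.

        The paper penalizes this kind of deviation with a differentiable
        energy term built on a matting Laplacian; here we simply report
        the per-window least-squares residual.
        """
        M = fit_local_affine(inp_patch, out_patch)
        X = np.hstack([inp_patch, np.ones((inp_patch.shape[0], 1))])
        return float(np.mean((X @ M.T - out_patch) ** 2))
    ```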

    MiR-429 suppresses the progression and metastasis of osteosarcoma by targeting ZEB1

    MiR-429 functions as a tumor suppressor in multiple types of cancer, but its effects and mechanisms in osteosarcoma are poorly understood. This study was performed to evaluate the functions of miR-429 in the progression of osteosarcoma. First, miR-429 expression in osteosarcoma tissues and osteosarcoma cells was measured using real-time PCR, and the relationship between miR-429 expression and overall survival in osteosarcoma was analyzed. Second, the effects of miR-429 on the migration, invasion, proliferation and apoptosis of osteosarcoma cells were evaluated using transwell assay, wound-healing assay, CCK-8 assay and flow cytometry, respectively. Proteins related to epithelial-mesenchymal transition (EMT), namely E-cadherin, Vimentin, N-cadherin and Snail, were also detected by Western blot. Finally, the target gene of miR-429 in osteosarcoma was predicted and verified using a dual luciferase assay, and the expression correlation between the two was analyzed using Pearson's correlation. MiR-429 was down-regulated in osteosarcoma tissues and cells, and its expression level was associated with the prognosis of osteosarcoma. A high level of miR-429 in osteosarcoma cells significantly suppressed migration, invasion and proliferation but induced apoptosis. Furthermore, a high level of miR-429 markedly increased the expression of E-cadherin protein but decreased the expression of Vimentin, N-cadherin and Snail proteins. The EMT inducer ZEB1 was identified as the target gene of miR-429, and ZEB1 expression was negatively correlated with miR-429 expression in osteosarcoma. In conclusion, miR-429 may function as a tumor suppressor that is down-regulated in osteosarcoma, and may suppress the progression and metastasis of osteosarcoma by down-regulating ZEB1 expression.
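    As a small illustration of the final analysis step, here is a hedged sketch of a Pearson correlation test between miR-429 and ZEB1 expression using SciPy; the values are hypothetical relative expression levels, not the study's data.

    ```python
    # Hedged sketch of the correlation analysis described above. The
    # values below are hypothetical per-sample relative expression
    # levels, not data from the study.
    import numpy as np
    from scipy.stats import pearsonr

    mir429 = np.array([1.0, 0.8, 0.6, 0.9, 0.4, 0.3, 0.7, 0.5])
    zeb1 = np.array([0.9, 1.1, 1.6, 1.0, 2.1, 2.4, 1.3, 1.8])

    r, p = pearsonr(mir429, zeb1)
    print(f"Pearson r = {r:.3f}, p = {p:.4f}")
    # A significantly negative r, as the study reports, is consistent
    # with miR-429 repressing ZEB1.
    ```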

    I²-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs

    In this work, we present I²-SDF, a new method for intrinsic indoor scene reconstruction and editing using differentiable Monte Carlo raytracing on neural signed distance fields (SDFs). Our holistic neural SDF-based framework jointly recovers the underlying shapes, incident radiance and materials from multi-view images. We introduce a novel bubble loss for fine-grained small objects and an error-guided adaptive sampling scheme to substantially improve reconstruction quality on large-scale indoor scenes. Further, we propose to decompose the neural radiance field into the spatially varying material of the scene, represented as a neural field, through surface-based differentiable Monte Carlo raytracing and emitter semantic segmentation, which enables physically based and photorealistic scene relighting and editing applications. Through a number of qualitative and quantitative experiments, we demonstrate the superior quality of our method on indoor scene reconstruction, novel view synthesis, and scene editing compared to state-of-the-art baselines. (Accepted by CVPR 2023.)
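    The basic ray/surface query behind raytracing in SDFs is sphere tracing: march along the ray, stepping by the signed distance, until the distance falls below a threshold. The sketch below illustrates this with an analytic sphere SDF standing in for a neural network; it is not I²-SDF's differentiable Monte Carlo renderer, which builds on this kind of query.

    ```python
    # Generic sphere tracing against a signed distance field. Purely
    # illustrative: a neural SDF would replace the analytic sdf() with
    # an MLP query.
    import numpy as np

    def sdf(p):
        """Stand-in SDF: a unit sphere centered at the origin."""
        return np.linalg.norm(p) - 1.0

    def sphere_trace(origin, direction, max_steps=128, eps=1e-4, t_max=20.0):
        """March along a ray (direction assumed unit length), stepping by the SDF value."""
        t = 0.0
        for _ in range(max_steps):
            d = sdf(origin + t * direction)
            if d < eps:              # close enough to the zero level set: hit
                return origin + t * direction
            t += d                   # the SDF value is a safe, conservative step
            if t > t_max:
                break
        return None                  # the ray missed the scene

    print(sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
    # ~[0. 0. -1.], the near intersection with the sphere
    ```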

    DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model

    We propose DMV3D, a novel 3D generation approach that uses a transformer-based 3D large reconstruction model to denoise multi-view diffusion. Our reconstruction model incorporates a triplane NeRF representation and can denoise noisy multi-view images via NeRF reconstruction and rendering, achieving single-stage 3D generation in about 30 seconds on a single A100 GPU. We train DMV3D on large-scale multi-view image datasets of highly diverse objects using only image reconstruction losses, without accessing 3D assets. We demonstrate state-of-the-art results on the single-image reconstruction problem, where probabilistic modeling of unseen object parts is required to generate diverse reconstructions with sharp textures. We also show high-quality text-to-3D generation results that outperform previous 3D diffusion models. Our project website is at https://justimyhxu.github.io/projects/dmv3d/.
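    The single-stage loop described above can be sketched as follows: at each diffusion timestep, the reconstruction model maps the current noisy views to a triplane NeRF, and rendering that NeRF at the input cameras serves as the clean-image prediction for the diffusion update. Everything below (reconstructor, nerf.render, the schedule handling) is placeholder pseudocode under our assumptions, not DMV3D's actual API.

    ```python
    # Placeholder pseudocode for the loop described above, under our own
    # naming -- not DMV3D's actual API. `alphas` holds the cumulative
    # noise-schedule products (alpha-bar), indexed by timestep.
    import torch

    def denoise_multiview(noisy_views, cameras, reconstructor, timesteps, alphas):
        """noisy_views: (V, 3, H, W) tensor of multi-view images at max noise."""
        x_t = noisy_views
        for t in reversed(timesteps):
            # Reconstruction model: noisy views -> triplane NeRF (hypothetical call).
            nerf = reconstructor(x_t, cameras, t)
            # Rendering the NeRF at the input cameras is the clean-image prediction.
            x0_pred = nerf.render(cameras)
            # Deterministic DDIM-style update toward the predicted clean views.
            a_t, a_prev = alphas[t], alphas[max(t - 1, 0)]
            eps = (x_t - a_t.sqrt() * x0_pred) / (1.0 - a_t).sqrt()
            x_t = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
        return nerf  # the final triplane NeRF is the generated 3D asset
    ```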

    Forward and Inverse Rendering with Gradient-based Optimization

    Photorealistic rendering is a fundamental component of computer graphics. To render photorealistic 2D images that faithfully represent our 3D world, much effort has been devoted to developing compact scientific models and efficient algorithms capable of simulating lighting and physics, capturing appearance through geometry, reflectance and illumination. Inverse rendering, on the other hand, focuses on recovering unknown scene attributes from photographic measurements by optimization, typically through analysis-by-synthesis techniques or, more recently, differentiable rendering. In this thesis, we propose solutions to several such challenging forward and inverse graphics problems, as described below.

    The first work is a forward rendering approach (SIGGRAPH 2020) to speeding up Markov chain Monte Carlo (MCMC) rendering with derivatives from recent rendering engines that support end-to-end differentiation. Building upon Langevin dynamics, we propose a suite of Langevin Monte Carlo algorithms with gradient-based adaptation for efficient photorealistic rendering of scenes with complex light transport effects, such as caustics, interreflections, and occlusions.

    In inverse rendering, we develop multiple solutions for recovering material and shape. In our second work (ICCP 2020), we focus on learning-based inverse subsurface scattering: given images of translucent objects of unknown shape and lighting, we use learning to infer the optical parameters controlling subsurface scattering of light inside the objects. With physics-based priors in the learning framework, we obtain strong improvements in both parameter estimation accuracy and appearance reproduction compared to traditional networks. Our third work is a unified shape and appearance reconstruction framework (EGSR 2021) using differentiable rendering for 3D object scanning. We tackle this problem by introducing a new analysis-by-synthesis technique that produces high-quality reconstructions through robust coarse-to-fine optimization and physics-based differentiable rendering, and we demonstrate its effectiveness on real-world objects captured with handheld cameras, outperforming previous state-of-the-art approaches.

    In Appendix A, we tackle the challenge of automatically generating procedural representations of fiber-based yarn models for cloth rendering (SIGGRAPH 2016, EGSR 2017). Appendix B deviates slightly from rendering and focuses on image style transfer techniques (CVPR 2017, EGSR 2018).

    With recent advances in physics-based differentiable rendering, we have taken first steps toward speeding up MCMC forward rendering with first-order gradients, improving learning-based inverse subsurface scattering, and introducing an end-to-end differentiable rendering pipeline for high-quality handheld object scanning. Looking ahead, promising directions include learning-based neural rendering methods aimed at AR/VR applications, and exploring differentiable rendering to improve traditional 3D reconstruction, thanks to the theoretical and practical breakthroughs in the corresponding field.
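    The Langevin Monte Carlo machinery that the first project builds on can be summarized in a few lines. Below is a minimal sketch of a plain Metropolis-adjusted Langevin (MALA) step for a generic target density, not the thesis's rendering-specific, gradient-adapted variant; the standard-normal target and all names are stand-ins.

    ```python
    # Minimal MALA sketch: propose by drifting along the gradient of the
    # log target density plus Gaussian noise, then Metropolis-correct.
    # The standard-normal target is a stand-in for a rendering integrand.
    import numpy as np

    def log_pi(x):
        return -0.5 * np.dot(x, x)        # log density of N(0, I), up to a constant

    def grad_log_pi(x):
        return -x                         # its gradient

    def mala_step(x, step, rng):
        # Langevin proposal: x' = x + (step^2 / 2) * grad log pi(x) + step * noise
        mean_fwd = x + 0.5 * step**2 * grad_log_pi(x)
        y = mean_fwd + step * rng.standard_normal(x.shape)
        # Metropolis correction with the asymmetric proposal densities.
        mean_bwd = y + 0.5 * step**2 * grad_log_pi(y)
        log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (2 * step**2)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * step**2)
        log_accept = log_pi(y) + log_q_bwd - log_pi(x) - log_q_fwd
        return y if np.log(rng.uniform()) < log_accept else x

    rng = np.random.default_rng(0)
    x, samples = np.zeros(2), []
    for _ in range(1000):
        x = mala_step(x, step=0.5, rng=rng)
        samples.append(x)
    print(np.mean(samples, axis=0))       # ~[0, 0] for the standard-normal target
    ```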

    Fitting procedural yarn models for realistic cloth rendering
